Effect of Adaptive Communication Support on Human-AI Collaboration

Liu, Shipeng, Shrutika, FNU, Zhang, Boshen, Huang, Zhehui, Qian, Feifei

arXiv.org Artificial Intelligence

Effective human-AI collaboration requires agents to adapt their roles and levels of support based on human needs, task requirements, and complexity. Traditional human-AI teaming often relies on a pre-determined robot communication scheme, restricting teamwork adaptability in complex tasks. Leveraging the strong communication capabilities of Large Language Models (LLMs), we propose a Human-Robot Teaming Framework with Multi-Modal Language feedback (HRT-ML), a framework designed to enhance human-robot interaction by adjusting the frequency and content of language-based feedback. The HRT-ML framework includes two core modules: a Coordinator for high-level, low-frequency strategic guidance and a Manager for task-specific, high-frequency instructions, enabling passive and active interactions with human teammates. To assess the impact of language feedback in collaborative scenarios, we conducted experiments in an enhanced Overcooked-AI game environment with varying levels of task complexity (easy, medium, hard) and feedback frequency (inactive, passive, active, superactive). Our results show that as task complexity increases relative to human capabilities, human teammates exhibited stronger preferences toward robotic agents that can offer frequent, proactive support. However, when task complexity exceeds the LLM's capacity, noisy and inaccurate feedback from superactive agents can instead hinder team performance, as it requires human teammates to increase their effort to interpret and respond to the large amount of communication, with limited performance return. Our results offer a general principle for robotic agents to dynamically adjust their levels and frequencies of communication to work seamlessly with humans and achieve improved teaming performance.
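The abstract's core principle, scale feedback up with task complexity but cap it once complexity exceeds what the agent can comment on reliably, could be sketched as a simple policy. The function name, the numeric thresholds, and the normalized complexity/capacity scores below are all illustrative assumptions, not details from the paper:

```python
from enum import Enum

class FeedbackMode(Enum):
    """The four feedback-frequency levels studied in the paper."""
    INACTIVE = 0
    PASSIVE = 1
    ACTIVE = 2
    SUPERACTIVE = 3

def choose_feedback_mode(task_complexity: float, agent_capacity: float) -> FeedbackMode:
    """Pick a feedback level given a task-complexity score and the agent's
    capacity, both assumed normalized to [0, 1] (hypothetical scales).

    More proactive support as complexity grows, but fall back from
    SUPERACTIVE when complexity exceeds the agent's capacity, since
    frequent feedback then becomes noise the human must filter out.
    """
    if task_complexity < 0.3:
        # Easy tasks: humans prefer minimal, on-demand support.
        return FeedbackMode.PASSIVE
    if task_complexity <= agent_capacity:
        # Within the agent's competence: ramp up support with complexity.
        return FeedbackMode.SUPERACTIVE if task_complexity > 0.7 else FeedbackMode.ACTIVE
    # Beyond the agent's capacity: stay active but avoid flooding the human.
    return FeedbackMode.ACTIVE
```

A real system would estimate `task_complexity` and `agent_capacity` online rather than take them as fixed inputs; the sketch only shows the shape of the decision rule.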


How AI can make your next ER visit less stressful

FOX News

Emergency departments have been experiencing a significant strain for a long time, with visits only increasing. With the rise in flu, COVID-19 and RSV cases, in addition to an increase in psychiatric symptoms, emergency wait times can be agonizing. A place that is supposed to be known for its immediate access to care has instead become known for its stressful and harrowing delays. Even with the holiday season behind us – a time that is typically known for longer wait times – we're seeing a trend that these wait times are getting worse in general. For patients seeking urgent care, these delays can be excruciating, causing unnecessary anxiety and potentially worsening their health conditions.


A human-centric approach to adopting AI

MIT Technology Review

This episode is part of our "Building the future" podcast series. It's a multi-episode series focusing on how organizations, researchers, and innovators are meeting our evolving global challenges. We understand the importance of inclusive conversations and have chosen to highlight the work of women on the cutting edge of technological innovation and business excellence. Researchers are similarly unlocking the value of AI through machine learning and robots that are developed to augment rather than replace human capabilities across manufacturing, health care, and space exploration. The robots of the past were kept in cages on factory floors and in labs, but this new era of AI-enabled robotics allows humans to work interdependently with robots to boost productivity, increase quality of work, and enable greater flexibility, says Julie Shah, professor in the department of aeronautics at MIT. Shah is also the co-lead of the Work of the Future Initiative at MIT. "Sometimes it can feel as though the emergence of these technologies is just going to sort of steamroll, and work and jobs are going to change in some predetermined way because the technology now exists," says Shah. "But we know from the research that the data doesn't bear that out actually."


Survey of researchers and the public on attitudes toward BRAIN–AI convergence - eMedNews

#artificialintelligence

The BRAIN-AI initiative, a fusion of artificial intelligence and neuroscience research by the JST ERATO Ikegaya Brain-AI Fusion Project, has the potential to break through existing limitations on brain activity and expand human capabilities. Meanwhile, as the BRAIN-AI initiative proceeds, experts are expected to examine the ethical, legal, and social problems that must be discussed. These problems include the handling of information in the human brain itself (the most private information available), the potential for increased inequality due to future technologies that enhance human capabilities, and philosophical issues surrounding identity, such as who is responsible for decisions made by a brain–computer interface (BCI). To address these challenges, the BRAIN–AI HITE collaboration effort brings together BRAIN–AI researchers with researchers specializing in ethical, legal, and social implications (ELSI); responsible research and innovation (RRI); and philosophy. The goal is to ensure that BRAIN–AI research can bring true happiness (well-being) to humans by examining, from the earliest stages of research, the impact of this technology on people and society based on the in-depth knowledge of the humanities and social sciences. As part of this effort, a team led by Associate Professor Ryuma Shineha and Specially Appointed Assistant Professor Shu Ishida at Osaka University's Research Center on Ethical, Legal, and Social Issues (ELSI) conducted a survey on attitudes toward brain science and brain information, targeting the general public (2,000 responses) and researchers in the field of neuroscience (108 responses).


Artificial Intelligence Training is the Future

#artificialintelligence

Artificial Intelligence (AI) is changing the world at an unprecedented pace, creating new opportunities and challenges for businesses and society as a whole. AI is transforming how we work, live, and interact with each other, and it is expected to significantly impact the job market in the coming years. Two emerging fields that are likely to play a critical role in shaping the future of AI are AI training and human-in-the-loop. In this article, we will explore why AI training and human-in-the-loop are the jobs of the future and their potential implications for the workforce. First, let's define what is meant by AI training and human-in-the-loop.


The ChatGPT bot is causing panic now – but it'll soon be as mundane a tool as Excel | John Naughton

The Guardian

So the ChatGPT language processing model burst upon an astonished world and the air was rent by squeals of delight and cries of outrage or lamentation. The delighted ones were those transfixed by discovering that a machine could apparently carry out a written commission competently. The outrage was triggered by fears of redundancy on the part of people whose employment requires the ability to write workmanlike prose. And the lamentations came from earnest folks (many of them teachers at various levels) whose day jobs involve grading essays hitherto written by students. If we know anything from history, it is that we generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications.


I wrote this column myself, but how long before a chatbot could do it for me? | John Naughton

The Guardian

Those who, like this columnist, spend too much time online will have noticed a kind of feeding frenzy over the past two weeks. The cause has been the release of an interesting chatbot – a software application capable of conducting an online conversation. The particular bot creating the fuss is ChatGPT, a prototype artificial intelligence (AI) chatbot that focuses on usability and dialogue and was developed by OpenAI, an AI research laboratory based in San Francisco. ChatGPT uses a large language model built via machine-learning methods and is based on OpenAI's GPT-3 model, which is capable of producing human-like text when given a prompt in natural language. It's an example of what has come to be called "generative AI": software that uses machine-learning algorithms to enable machines to generate artificial content – text, images, audio and video based on its training data – in a way that can lead a human user to believe its outputs are "real".



Council Post: Is AI The New SaaS? Understanding The Role Of AI In The New Digital Age

#artificialintelligence

Champ Suthipongchai is a General Partner at Creative Ventures, a market-driven Deep Tech venture capital firm based in San Francisco. Soon AI will take over the world. There will be no jobs left for human beings. As a species, we will slowly realize that our existence is meaningless, staring off into the distance while an AI robot cooks our food and folds our laundry. Or at least that's the reality some people envision when they think of the rise of AI--and as general partner of a deep tech venture capital firm, I believe it's far off-base from what lies ahead.


Thinking fast and slow with AI

#artificialintelligence

Acclaimed psychologist Daniel Kahneman, in his book Thinking, Fast and Slow, proposed that the human brain has two separate systems for decision making: system 1 and system 2. System 1 controls unconscious decision making like walking, climbing stairs, brushing teeth, and other tasks that may not require conscious thought. On the other hand, system 2 is a slower and more deliberative type of decision-making system for tasks like playing chess, solving mathematical equations, etc. Now, what if we apply the same principle to artificial intelligence systems? Notably, this is not an entirely new concept. AI pioneer Yoshua Bengio has spoken about it at length on multiple occasions.
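The dual-system idea above maps naturally onto a common software pattern: try a cheap, reflexive answer first, and only invoke expensive deliberation on a miss. This is a minimal sketch of that analogy, not a description of any particular AI system; the function names and the cache-based "system 1" are illustrative assumptions:

```python
from typing import Optional

def fast_path(query: str, cache: dict) -> Optional[str]:
    # "System 1": an instant, reflexive answer; here, a cache of past results.
    return cache.get(query)

def slow_path(query: str) -> str:
    # "System 2": slow, deliberate reasoning; here, just a placeholder computation.
    return f"deliberated answer to {query!r}"

def answer(query: str, cache: dict) -> str:
    """Try the fast system first; fall back to the slow one, then memoize
    so the next identical query is handled reflexively."""
    result = fast_path(query, cache)
    if result is None:
        result = slow_path(query)
        cache[query] = result
    return result
```

The point of the analogy is the handoff: repeated deliberation gradually becomes reflex, much as a chess pattern studied slowly is later recognized at a glance.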